Python for Data Analysis by Wes McKinney

Author: Wes McKinney
Language: English
Tags: COMPUTERS / Programming Languages / Python
ISBN: 9781449319786
Publisher: O'Reilly Media
Published: 2012-10-07


Example: Group Weighted Average and Correlation

Under the split-apply-combine paradigm of groupby, operations between columns in a DataFrame or between two Series, such as a group weighted average, become a routine affair. As an example, take this dataset containing group keys, values, and some weights:

In [127]: df = DataFrame({'category': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
   .....:                 'data': np.random.randn(8),
   .....:                 'weights': np.random.rand(8)})

In [128]: df
Out[128]:
  category      data   weights
0        a  1.561587  0.957515
1        a  1.219984  0.347267
2        a -0.482239  0.581362
3        a  0.315667  0.217091
4        b -0.047852  0.894406
5        b -0.454145  0.918564
6        b -0.556774  0.277825
7        b  0.253321  0.955905

The group weighted average by category would then be:

In [129]: grouped = df.groupby('category')

In [130]: get_wavg = lambda g: np.average(g['data'], weights=g['weights'])

In [131]: grouped.apply(get_wavg)
Out[131]:
category
a    0.811643
b   -0.122262
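To see what np.average is doing under the hood, recall that a weighted average is sum(w * x) / sum(w). A minimal sketch (not from the book; the variable names here are illustrative) verifies this by hand for one group:

# Manual weighted average for group 'a':
group_a = df[df['category'] == 'a']
manual_wavg = ((group_a['data'] * group_a['weights']).sum()
               / group_a['weights'].sum())
# manual_wavg should equal grouped.apply(get_wavg)['a'], 0.811643 above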

As a less trivial example, consider a dataset from Yahoo! Finance containing end-of-day prices for a few stocks and the S&P 500 index (the SPX ticker):

In [132]: close_px = pd.read_csv('ch09/stock_px.csv', parse_dates=True, index_col=0)

In [133]: close_px
Out[133]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2214 entries, 2003-01-02 00:00:00 to 2011-10-14 00:00:00
Data columns:
AAPL    2214  non-null values
MSFT    2214  non-null values
XOM     2214  non-null values
SPX     2214  non-null values
dtypes: float64(4)

In [134]: close_px[-4:]
Out[134]:
              AAPL   MSFT    XOM      SPX
2011-10-11  400.29  27.00  76.27  1195.54
2011-10-12  402.19  26.96  77.16  1207.25
2011-10-13  408.43  27.18  76.37  1203.66
2011-10-14  422.00  27.27  78.11  1224.58

One task of interest might be to compute a DataFrame consisting of the yearly correlations of daily returns (computed from percent changes) with SPX. Here is one way to do it:

In [135]: rets = close_px.pct_change().dropna()

In [136]: spx_corr = lambda x: x.corrwith(x['SPX'])

In [137]: by_year = rets.groupby(lambda x: x.year)

In [138]: by_year.apply(spx_corr)
Out[138]:
          AAPL      MSFT       XOM  SPX
2003  0.541124  0.745174  0.661265    1
2004  0.374283  0.588531  0.557742    1
2005  0.467540  0.562374  0.631010    1
2006  0.428267  0.406126  0.518514    1
2007  0.508118  0.658770  0.786264    1
2008  0.681434  0.804626  0.828303    1
2009  0.707103  0.654902  0.797921    1
2010  0.710105  0.730118  0.839057    1
2011  0.691931  0.800996  0.859975    1
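When a function is passed to groupby, it is called on each index label; here each label is a Timestamp, so lambda x: x.year buckets the rows by calendar year. As a sanity check, the correlations for a single year can be reproduced directly with boolean indexing (a sketch, not from the book):

# Select the 2003 rows and correlate each column with SPX;
# this should match the 2003 row of Out[138]
rets_2003 = rets[rets.index.year == 2003]
rets_2003.corrwith(rets_2003['SPX'])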

There is, of course, nothing to stop you from computing inter-column correlations:

# Annual correlation of Apple with Microsoft
In [139]: by_year.apply(lambda g: g['AAPL'].corr(g['MSFT']))
Out[139]:
2003    0.480868
2004    0.259024
2005    0.300093
2006    0.161735
2007    0.417738
2008    0.611901
2009    0.432738
2010    0.571946
2011    0.581987
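The same pattern generalizes to any pair of columns. A small helper factory (hypothetical, not from the book) makes the intent explicit and avoids repeating the lambda for each pair:

# Build a function that computes the per-group correlation
# between two named columns
def make_pairwise_corr(col1, col2):
    return lambda g: g[col1].corr(g[col2])

# Same result as In [139]:
by_year.apply(make_pairwise_corr('AAPL', 'MSFT'))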


